# Distillation Models
## Quasar 3.0 Instract V2

- Publisher: silx-ai · Downloads: 314 · Likes: 8
- Task: Large Language Model · Tags: Transformers

Quasar-3.0-7B is the distilled version of the upcoming 400B Quasar 3.0 model, showcasing the early strength and potential of the Quasar architecture.
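If the distilled checkpoint is published in the usual Transformers layout, it should load through the standard causal-LM classes. A minimal sketch, assuming a hypothetical `silx-ai/Quasar-3.0-7B` repo ID (check the silx-ai page for the exact name):

```python
# Minimal sketch: load and sample from the distilled model.
# The repo ID is an assumption, not confirmed by the listing.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "silx-ai/Quasar-3.0-7B"  # hypothetical repo ID
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

inputs = tokenizer("Explain model distillation in one sentence.", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```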
## Monoelectra Base

- Publisher: cross-encoder · License: Apache-2.0 · Downloads: 151 · Likes: 6
- Task: Text Embedding · Tags: Transformers, English

A text ranking cross-encoder based on the ELECTRA architecture, designed for retrieval result reranking tasks.
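Cross-encoders score a query and a passage jointly instead of embedding each separately, which is why they are used to rerank candidates returned by a first-stage retriever. A minimal sketch, assuming the checkpoint loads through sentence-transformers' `CrossEncoder` under the repo ID `cross-encoder/monoelectra-base`:

```python
# Minimal reranking sketch; repo ID inferred from the listing.
from sentence_transformers import CrossEncoder

model = CrossEncoder("cross-encoder/monoelectra-base")
query = "how does knowledge distillation work?"
passages = [
    "Distillation trains a small student model to imitate a large teacher.",
    "ELECTRA replaces masked-token prediction with replaced-token detection.",
]
# Each (query, passage) pair gets a relevance score; sort to rerank.
scores = model.predict([(query, p) for p in passages])
reranked = [p for _, p in sorted(zip(scores, passages), reverse=True)]
```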
## Distill Any Depth Small Hf

- Publisher: keetrap · License: Apache-2.0 · Downloads: 99 · Likes: 1
- Task: 3D Vision · Tags: Transformers

Distill-Any-Depth is a depth estimation model based on the Transformers architecture, suited to estimating per-pixel depth from single images.
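A minimal sketch of monocular depth estimation through the Transformers `depth-estimation` pipeline; the repo ID is inferred from the listing, and the input filename is a placeholder:

```python
# Minimal sketch: per-pixel depth from a single RGB image.
from transformers import pipeline

depth = pipeline("depth-estimation", model="keetrap/Distill-Any-Depth-Small-hf")
result = depth("room.jpg")  # placeholder path to any RGB image
# The pipeline returns a raw tensor plus a ready-to-save PIL depth map.
result["depth"].save("room_depth.png")
```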
## Indictrans2 Indic En Dist 200M

- Publisher: ai4bharat · License: MIT · Downloads: 3,123 · Likes: 5
- Task: Machine Translation · Tags: Transformers, Supports Multiple Languages

A 200M-parameter machine translation model optimized with distillation; this Indic-to-English variant translates from 22 Indian languages into English.
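IndicTrans2 checkpoints ship custom modeling code, so loading generally requires `trust_remote_code=True`, and inputs are expected to carry source/target language tags that the project's IndicTransToolkit preprocessing normally produces. A minimal sketch under those assumptions:

```python
# Minimal sketch: Hindi -> English with the distilled 200M checkpoint.
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer

model_id = "ai4bharat/indictrans2-indic-en-dist-200M"
tokenizer = AutoTokenizer.from_pretrained(model_id, trust_remote_code=True)
model = AutoModelForSeq2SeqLM.from_pretrained(model_id, trust_remote_code=True)

# Input already tagged in IndicTrans2's "src_lang tgt_lang sentence" style;
# in practice the IndicTransToolkit preprocessor builds this string.
text = "hin_Deva eng_Latn यह एक परीक्षण वाक्य है।"
inputs = tokenizer(text, return_tensors="pt")
out = model.generate(**inputs, max_length=128, num_beams=5)
print(tokenizer.batch_decode(out, skip_special_tokens=True)[0])
```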
## MLQ Distilbart Bbc

- Publisher: DeepNLP-22-23 · License: Apache-2.0 · Downloads: 20 · Likes: 0
- Task: Text Generation · Tags: Transformers

A text summarization model fine-tuned from sshleifer/distilbart-cnn-12-6 on the BBC News Summary dataset, developed by the Deep Natural Language Processing course lab at Politecnico di Torino.
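A minimal summarization sketch via the pipeline API; the repo ID is inferred from the listing and may be cased differently:

```python
# Minimal sketch: abstractive summarization of a short news passage.
from transformers import pipeline

summarizer = pipeline("summarization", model="DeepNLP-22-23/mlq-distilbart-bbc")
article = (
    "The city council approved a new cycling plan on Tuesday. Officials said "
    "the network of protected lanes should open within two years."
)
print(summarizer(article, max_length=60, min_length=10)[0]["summary_text"])
```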
## Tinysapbert From TinyPubMedBERT V1.0

- Publisher: dmis-lab · Downloads: 16.93k · Likes: 0
- Task: Large Language Model · Tags: Transformers

TinySapBERT is a compact biomedical entity representation model trained using the SapBERT framework, specifically designed for biomedical named entity recognition tasks.
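SapBERT-style encoders are usually applied by taking the [CLS] vector as the embedding of an entity mention and comparing mentions by cosine similarity. A minimal sketch, assuming the `dmis-lab/TinySapBERT-from-TinyPubMedBERT-v1.0` repo ID:

```python
# Minimal sketch: embed two biomedical mentions and compare them.
import torch
from transformers import AutoModel, AutoTokenizer

model_id = "dmis-lab/TinySapBERT-from-TinyPubMedBERT-v1.0"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModel.from_pretrained(model_id)

mentions = ["myocardial infarction", "heart attack"]
batch = tokenizer(mentions, padding=True, return_tensors="pt")
with torch.no_grad():
    cls = model(**batch).last_hidden_state[:, 0]  # [CLS] per mention
sim = torch.nn.functional.cosine_similarity(cls[0], cls[1], dim=0)
print(f"similarity: {sim.item():.3f}")  # synonyms should score high
```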
## Moco Sentencedistilbertv2.0

- Publisher: bongsoo · Downloads: 39 · Likes: 1
- Task: Text Embedding · Tags: Transformers, Supports Multiple Languages

A Korean-English bilingual sentence embedding model built on sentence-transformers; it maps sentences into a 768-dimensional vector space for semantic search and clustering.
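A minimal sketch with the sentence-transformers library, assuming the repo ID `bongsoo/moco-sentencedistilbertv2.0` from the listing:

```python
# Minimal sketch: embed a Korean/English pair and compare them.
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer("bongsoo/moco-sentencedistilbertv2.0")
sentences = ["오늘 날씨가 좋다.", "The weather is nice today."]
embeddings = model.encode(sentences)  # shape: (2, 768)
print(util.cos_sim(embeddings[0], embeddings[1]))
```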